
    Reconfigurable FPGA Architecture for Computer Vision Applications in Smart Camera Networks

    Smart Camera Networks (SCNs) are an emerging research field representing the natural evolution of centralized computer vision applications towards fully distributed and pervasive systems. In this vision, one of the biggest efforts lies in defining a flexible and reconfigurable SCN node architecture able to remotely update application parameters and the executed computer vision application at run-time. In this respect, we present a novel SCN node architecture based on a device in which a microcontroller manages all the network functionality as well as the remote configuration, while an FPGA implements all the necessary modules of a full computer vision pipeline. In this work the envisioned architecture is first detailed in general terms; then a real implementation is presented to show the feasibility and benefits of the proposed solution. Finally, performance evaluation results underline the potential of a hardware/software co-design approach in achieving flexibility and reduced processing time.

    Design Productivity of a High Level Synthesis Compiler versus HDL

    The complexity of hardware systems is currently growing faster than the productivity of system designers and programmers. This phenomenon, called the Design Productivity Gap, results in inflating design costs. In this paper, the notion of Design Productivity is precisely defined, together with a metric to assess the Design Productivity of a High-Level Synthesis (HLS) method versus a manual hardware description. The proposed Design Productivity metric evaluates the trade-off between design efficiency and implementation quality. The method is generic enough to compare several HLS methods of different natures, opening opportunities for further progress in Design Productivity. To demonstrate the Design Productivity evaluation method, an HLS compiler based on the CAPH language is compared to manual VHDL writing, and the causes that make VHDL lower level than CAPH are discussed. Versions of the sub-pixel interpolation filter from the MPEG HEVC standard are implemented, and a design productivity gain of 2.3× on average is measured for the CAPH HLS method. It results from an average gain in design time of 4.4× and an average loss in quality of 1.9×.
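    The reported figures fit a simple ratio: a minimal sketch, assuming (as the abstract suggests) that the productivity gain is the design-time gain divided by the quality loss.

```python
# Hedged sketch of the Design Productivity trade-off described above.
# Assumption (not the paper's formal definition): the productivity gain
# is the ratio of the design-time gain to the implementation-quality loss.

def productivity_gain(design_time_gain: float, quality_loss: float) -> float:
    """Trade-off between design efficiency and implementation quality."""
    return design_time_gain / quality_loss

# Figures reported in the abstract: 4.4x faster design, 1.9x quality loss.
gain = productivity_gain(4.4, 1.9)
print(f"{gain:.1f}x")  # prints "2.3x", matching the reported average
```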

    DreamCAM: A FPGA-based platform for smart camera networks

    The main challenges in smart camera networks come from the limited capacity of network communications. Indeed, wireless protocols such as IEEE 802.15.4 target low data rates, low power consumption and low-cost wireless networking in order to fit the requirements of sensor networks. Since nodes more and more often integrate image sensors, network bandwidth has become a strong limiting factor in application deployment. This means that data must be processed at the node level before being sent on the network. In this context, FPGA-based platforms, supporting massive data parallelism, offer large opportunities for on-board processing. We present in this paper our FPGA-based smart camera platform, called DreamCam, which is able to autonomously exchange processed information over an Ethernet network.

    Distributed FPGA-based smart camera architecture for computer vision applications

    Smart camera networks (SCNs) raise challenging issues in many fields of research, including vision processing, communication protocols, distributed algorithms and power management. Furthermore, application logic in SCNs is not centralized but spread among network nodes, meaning that each node must process images to extract significant features and aggregate data to understand the surrounding environment. In this context, smart cameras first embedded general-purpose processors (GPPs) for image processing. As image resolutions increase, GPPs have reached their limits in maintaining real-time processing constraints. More recently, FPGA-based platforms have been studied for their massive parallelism capabilities. This paper presents our new FPGA-based smart camera platform supporting cooperation between nodes and run-time updatable image processing. The architecture is based on a fully reconfigurable pipeline driven by a softcore processor.

    HOG-Dot: A Parallel Kernel-Based Gradient Extraction for Embedded Image Processing

    In this paper we propose HOG-Dot, a method for computing the polar coordinates of image gradients directly from pixel values. The proposed algorithm, to be used as the first step of the Histogram of Oriented Gradients (HOG) pipeline, approximates the exact gradient with its projection onto a versor chosen among the projection plane set. Instead of nonlinear computations, the HOG-Dot method exploits linear operations while introducing a bounded approximation error with respect to other HOG approaches, thus resulting in a solution more suitable for embedded devices. Compared to the state of the art, it also achieves improved accuracy with respect to the exact mathematical spatial gradient formulation.
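    The core idea can be sketched in a few lines: replace the nonlinear magnitude/orientation step (sqrt, atan2) with dot products against a fixed set of unit vectors. The bin count, versor set and selection rule below are illustrative assumptions, not the paper's exact formulation.

```python
import math

# Hedged sketch of a dot-product gradient approximation in the spirit of
# HOG-Dot. VERSORS holds one unit vector per orientation bin; the bin count
# of 9 is a typical HOG choice, assumed here for illustration.
N_BINS = 9
VERSORS = [(math.cos(k * math.pi / N_BINS), math.sin(k * math.pi / N_BINS))
           for k in range(N_BINS)]

def hog_dot(gx: float, gy: float):
    """Approximate (magnitude, bin) of gradient (gx, gy) via dot products."""
    dots = [gx * vx + gy * vy for vx, vy in VERSORS]
    best = max(range(N_BINS), key=lambda k: abs(dots[k]))
    # The projection length onto the best-aligned versor approximates
    # the gradient magnitude; the versor index gives the orientation bin.
    return abs(dots[best]), best

# A purely horizontal gradient projects fully onto the first versor (angle 0),
# so the projection recovers the exact magnitude here.
mag, bin_idx = hog_dot(3.0, 0.0)
```

Only multiplications, additions and a maximum are needed per pixel, which is the property that makes this kind of formulation attractive for FPGA or microcontroller targets.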

    Parallel Image Gradient Extraction Core For FPGA-based Smart Cameras

    One of the biggest efforts in designing pervasive Smart Camera Networks (SCNs) is the implementation of complex and computationally intensive computer vision algorithms on resource-constrained embedded devices. For low-level processing, FPGA devices are excellent candidates because they support massive, fine-grain data parallelism with high data throughput. However, while FPGAs offer a way to meet the stringent constraints of real-time execution, their exploitation often requires significant algorithmic reformulations. In this paper, we propose a reformulation of a kernel-based gradient computation module specially suited to FPGA implementations. The resulting algorithm operates on-the-fly, without the need for video buffers, and delivers a constant throughput. It has been tested and used as the first stage of an application extracting Histograms of Oriented Gradients (HOG). Evaluation shows that its performance and low memory requirements perfectly match low-cost, memory-constrained embedded devices.
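    The on-the-fly property can be illustrated with a streaming central-difference sketch; this is an illustration of the general buffering scheme, not the paper's actual core. Only a little more than two image lines are retained, never a full frame, so one gradient can be emitted per incoming pixel.

```python
from collections import deque

# Illustrative sketch of on-the-fly gradient extraction over a raster-scan
# pixel stream: a bounded deque acts as the line delays an FPGA would use,
# so memory stays constant regardless of frame count.

WIDTH = 8  # image width; small value chosen for illustration

def stream_gradient(pixels):
    """Yield (gx, gy) per pixel using central differences on a 3-line window."""
    lines = deque(maxlen=2 * WIDTH + 1)  # just over two image lines
    for p in pixels:
        lines.append(p)
        if len(lines) == 2 * WIDTH + 1:
            # Window centre is lines[WIDTH]; its horizontal neighbours are
            # one position away, its vertical neighbours one line away.
            gx = lines[WIDTH + 1] - lines[WIDTH - 1]
            gy = lines[2 * WIDTH] - lines[0]
            yield gx, gy

# A 4-row ramp image (each row holds its row index) gives gy == 2 everywhere,
# since vertically adjacent samples differ by 1 and are two lines apart.
img = [r for r in range(4) for _ in range(WIDTH)]
grads = list(stream_gradient(img))
```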

    Models of Architecture: Reproducible Efficiency Evaluation for Signal Processing Systems

    The current trend in high-performance and embedded signal processing consists of designing increasingly complex heterogeneous hardware architectures with non-uniform communication resources. In order to take hardware and software design decisions, early evaluations of the system's non-functional properties are needed. These evaluations of system efficiency require high-level information on both the algorithms and the architecture. In this paper, we define the notion of Model of Architecture (MoA) and study the combination of a Model of Computation (MoC) and an MoA to provide a design space exploration environment for studying algorithmic and architectural choices. A cost is computed from the mapping of an application, represented by a model conforming to a MoC, onto an architecture, represented by a model conforming to an MoA. The cost is composed of a processing-related part and a communication-related part. It is an abstract scalar value to be minimized and can represent any non-functional requirement of a system, such as memory, energy, throughput or latency.

    Models of Architecture

    The current trend in high-performance and embedded computing consists of designing increasingly complex heterogeneous hardware architectures with non-uniform communication resources. In order to take hardware and software design decisions, early evaluations of the system's non-functional properties are needed. These evaluations of system efficiency require high-level information on both the algorithms and the architecture. In state-of-the-art Model Driven Engineering (MDE) methods, different communities have developed custom architecture models associated with languages of substantial complexity. This fact contrasts with Models of Computation (MoCs), which provide abstract representations of an algorithm's behavior as well as tool interoperability. In this report, we define the notion of Model of Architecture (MoA) and study the combination of a MoC and an MoA to provide a design space exploration environment for studying algorithmic and architectural choices. An MoA provides reproducible cost computation for evaluating the efficiency of a system. A new MoA called the Linear System-Level Architecture Model (LSLA) is introduced and compared to state-of-the-art models. LSLA aims at representing hardware efficiency with a linear model. The computed cost results from the mapping of an application, represented by a model conforming to a MoC, onto an architecture, represented by a model conforming to an MoA. The cost is composed of a processing-related part and a communication-related part. It is an abstract scalar value to be minimized and can represent any non-functional requirement of a system, such as memory, energy, throughput or latency.
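    The processing-plus-communication cost structure can be sketched as a linear model in the spirit of LSLA; all names, loads and coefficients below are illustrative assumptions, not the model's actual parameters.

```python
# Hedged sketch of a linear mapping cost: a processing part (work mapped
# onto processing elements) plus a communication part (tokens crossing
# communication nodes), each scaled by a per-resource linear coefficient.

def mapping_cost(actor_loads, pe_cost_per_load, token_counts, cn_cost_per_token):
    processing = sum(pe_cost_per_load[pe] * load
                     for pe, load in actor_loads.items())
    communication = sum(cn_cost_per_token[cn] * tokens
                        for cn, tokens in token_counts.items())
    return processing + communication  # abstract scalar to be minimized

# Two processing elements and one shared bus, with made-up coefficients:
cost = mapping_cost(
    actor_loads={"pe0": 100, "pe1": 40},       # e.g. cycles mapped per PE
    pe_cost_per_load={"pe0": 1.0, "pe1": 2.5},
    token_counts={"bus": 30},                  # tokens crossing the bus
    cn_cost_per_token={"bus": 0.5},
)
print(cost)  # 100*1.0 + 40*2.5 + 30*0.5 = 215.0
```

Because the cost is a single abstract scalar, the same computation can stand in for memory, energy, throughput or latency depending on how the coefficients are calibrated.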

    Factor Structure, Validity, and Reliability of the STarT Back Screening Tool in Italian Obese and Non-obese Patients With Low Back Pain

    Background: The STarT Back Screening Tool (SBST) is a self-report questionnaire developed for prognostic purposes which evaluates risk factors for disability outcomes in patients with chronic low back pain. Previous studies found that its use enables cost-effective stratified care. However, its dimensionality has been assessed only using exploratory approaches, and reports on its psychometric properties are conflicting. Objective: The objective of this study was to assess the factorial structure and the psychometric properties of the Italian version of the SBST. Materials and Methods: Patients with a medical diagnosis of low back pain were enrolled from a rehabilitation unit of a tertiary care hospital specialized in obesity care (Sample 1) and from a clinical internship center of an osteopathic training institute (Sample 2). At baseline and after 7 days, patients were asked to fill in a battery of self-report questionnaires. The factorial structure, internal consistency, test-retest reliability, and construct validity of the SBST were assessed. Results: One hundred forty-six patients were enrolled (62 from Sample 1 and 84 from Sample 2). The confirmatory factor analysis showed that the fit of the original two-correlated-factors model was adequate (CFI = 0.98, TLI = 0.99, RMSEA = 0.03). Cronbach's alpha of the total scale (alpha = 0.64) and of the subscales (physical subscale alpha = 0.55; psychological subscale alpha = 0.61) was below the cutoffs, partly because of the low correlation of item 2 with the other items. Test-retest reliability was adequate (ICC = 0.84). The SBST had moderate correlations with comparison questionnaires, except for the Roland-Morris Disability Questionnaire, which had a high correlation (r = 0.65). Discussion: The SBST has adequate psychometric properties and can be used to assess prognostic factors for disability in low back pain patients.
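    For reference, the internal-consistency statistic reported above (Cronbach's alpha) follows from item and total-score variances; a minimal sketch, with made-up data for illustration only.

```python
# Minimal sketch of Cronbach's alpha:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
# where k is the number of items. The data below are illustrative only.

def cronbach_alpha(item_scores):
    """item_scores: one inner list of respondent scores per item."""
    k = len(item_scores)
    n = len(item_scores[0])
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(item[j] for item in item_scores) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in item_scores) / var(totals))

# Three perfectly correlated items yield the maximum consistency:
items = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
print(round(cronbach_alpha(items), 2))  # prints 1.0
```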